[12] fix moving folders out of a cache jail #5655
Conversation
@icewind1991, thanks for your PR! By analyzing the history of the files in this pull request, we identified @rullzer, @nickvergessen and @LukasReschke to be potential reviewers.
Codecov Report

@@            Coverage Diff             @@
##           stable12    #5655    +/-   ##
==============================================
+ Coverage      54.09%   54.12%   +0.02%
- Complexity     22408    22422      +14
==============================================
  Files           1379     1380       +1
  Lines          85706    85767      +61
  Branches        1329     1329
==============================================
+ Hits           46365    46422      +57
- Misses         39341    39345       +4
Works 👍
Do we have some idea about the performance here if you have like 20 million files in the file cache? What is the execution time here? If this would take ages we may need to have a repair step if someone enters a folder or so.
@LukasReschke well it should only happen on files that got corrupted. And if you have 20k corrupted files then we still should fix them all... From my POV a PR that fixes it properly on update is way better than all kinds of dark magic when entering a folder... because that will basically require extra queries on each access...
			))
			->where($builder->expr()->neq('f.path', $computedPath));

		return $query->execute()->fetchAll();
Can't this be way too huge if you have some big corrupted shares, for example?
Agreed, it should be chunked somehow... Otherwise the MySQL server will go away
Switched to a generator that fetches rows separately.
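The generator change replaces a single `fetchAll()` (which materializes the whole result set in memory) with a cursor that yields one row per fetch. A minimal sketch of that pattern, illustrated in Python with `sqlite3` rather than the PR's PHP query builder; the table name, the `computed_path` column, and the sample rows are toy stand-ins for the real `oc_filecache` path-consistency check:

```python
import sqlite3

# Toy schema standing in for oc_filecache; `computed_path` plays the role
# of the path derived from the parent chain in the real repair query.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE filecache (fileid INTEGER, path TEXT, computed_path TEXT)"
)
conn.executemany(
    "INSERT INTO filecache VALUES (?, ?, ?)",
    [(1, "files/a", "files/a"),   # consistent entry
     (2, "cache/b", "files/b"),   # moved out of the cache jail
     (3, "cache/c", "files/c")],  # moved out of the cache jail
)

def mismatched_rows(conn):
    """Yield broken rows one at a time; the cursor streams from the
    database instead of buffering everything with fetchall()."""
    cur = conn.execute(
        "SELECT fileid, path FROM filecache WHERE path != computed_path"
    )
    while True:
        row = cur.fetchone()  # one row per call, constant memory
        if row is None:
            return
        yield row

broken = [fileid for fileid, _path in mismatched_rows(conn)]
print(broken)  # → [2, 3]
```

The caller iterates the generator exactly as it would iterate a list, so the calling code is unchanged while peak memory stays constant.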
	private function update($fileid, $newPath) {
		$builder = $this->connection->getQueryBuilder();

		$query = $builder->update('filecache')
Please create this only once and just set the parameters afterwards
fixed
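The "create this only once" suggestion means building the parameterized statement a single time and rebinding only the parameters on each call, rather than reconstructing the query for every row. A hedged sketch of that pattern in Python with `sqlite3` (the table and column names are illustrative, not the PR's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filecache (fileid INTEGER PRIMARY KEY, path TEXT)")
conn.executemany("INSERT INTO filecache VALUES (?, ?)",
                 [(1, "old/a"), (2, "old/b")])

# The statement text is built once; each call only binds fresh parameters,
# analogous to reusing one query-builder object and calling setParameter().
UPDATE_SQL = "UPDATE filecache SET path = ? WHERE fileid = ?"

def update(conn, fileid, new_path):
    conn.execute(UPDATE_SQL, (new_path, fileid))

update(conn, 1, "new/a")
update(conn, 2, "new/b")
paths = [r[0] for r in conn.execute("SELECT path FROM filecache ORDER BY fileid")]
print(paths)  # → ['new/a', 'new/b']
```

When the update runs tens of thousands of times during a repair, skipping per-call query construction saves measurable CPU on both the client and the database.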
	private function getId($storage, $path) {
		$builder = $this->connection->getQueryBuilder();

		$query = $builder->select('fileid')
Please create this only once and just set the parameters afterwards
While I agree that that would be the best thing to do in theory, it adds too much code complexity for the expected gain in this case.
How I see it is that we execute this like 100,000 times on our instance. Not sure if that is "negligible"?
fixed
	private function reparent($from, $to) {
		$builder = $this->connection->getQueryBuilder();

		$query = $builder->update('filecache')
Please create this only once and just set the parameters afterwards
fixed
	private function delete($fileid) {
		$builder = $this->connection->getQueryBuilder();

		$query = $builder->delete('filecache')
Please create this only once and just set the parameters afterwards
fixed
			->where($builder->expr()->neq('f.path', $computedPath));

		$result = $query->execute();
		while ($row = $result->fetch()) {
This still selects potentially 100k entries or more in one query and kills the database server.
It's not PHP having the problem. Please just use a limit and loop this method until the result set is empty.
fixed
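The limit-and-loop approach asks the database for at most a fixed number of broken rows per query, repairs them, and repeats until a query comes back empty; because each pass fixes the rows it selects, the loop terminates. A sketch of the pattern in Python with `sqlite3` under toy assumptions (chunk size, schema, and data are illustrative, not the PR's values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE filecache (fileid INTEGER, path TEXT, computed_path TEXT)"
)
conn.executemany(
    "INSERT INTO filecache VALUES (?, ?, ?)",
    [(i, f"cache/{i}", f"files/{i}") for i in range(10)],  # 10 broken rows
)

CHUNK = 4  # tiny chunk for the demo; real code would use something like 1000

def repair_in_chunks(conn):
    """Select at most CHUNK broken rows per query and repeat until the
    result set is empty, so no single query returns 100k+ rows."""
    repaired = 0
    while True:
        rows = conn.execute(
            "SELECT fileid, computed_path FROM filecache "
            "WHERE path != computed_path LIMIT ?", (CHUNK,)
        ).fetchall()
        if not rows:
            return repaired
        for fileid, good_path in rows:
            # Repairing the row removes it from the next SELECT,
            # which is what guarantees the loop makes progress.
            conn.execute("UPDATE filecache SET path = ? WHERE fileid = ?",
                         (good_path, fileid))
        repaired += len(rows)

repaired = repair_in_chunks(conn)
print(repaired)  # → 10
```

No `OFFSET` is needed: the `WHERE` clause itself shrinks as rows are fixed, which avoids the skipped-row pitfalls of paginating a result set that is being mutated.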
Codecov Report

@@            Coverage Diff             @@
##           stable12    #5655    +/-   ##
==============================================
+ Coverage      54.09%   54.13%   +0.03%
- Complexity     22413    22433      +20
==============================================
  Files           1379     1380       +1
  Lines          85719    85802      +83
  Branches        1329     1329
==============================================
+ Hits           46368    46445      +77
- Misses         39351    39357       +6
@icewind1991 Please rebase to fix the autoloader failure.
Signed-off-by: Robin Appelman <robin@icewind.nl>
Signed-off-by: Robin Appelman <robin@icewind.nl>
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
Signed-off-by: Robin Appelman <robin@icewind.nl>
Signed-off-by: Robin Appelman <robin@icewind.nl>
Signed-off-by: Robin Appelman <robin@icewind.nl>
Force-pushed from f7a901d to 069df4c
Done
Looks good to me. Let's do this!
Ported the added fixes to master: #5715
Backport of #5424